visual range
Extended Visibility of Autonomous Vehicles via Optimized Cooperative Perception under Imperfect Communication

Sarlak, Ahmad, Amin, Rahul, Razi, Abolfazl

arXiv.org Artificial Intelligence

Autonomous Vehicles (AVs) rely on individual perception systems to navigate safely. However, these systems face significant challenges in adverse weather conditions, complex road geometries, and dense traffic scenarios. Cooperative Perception (CP) has emerged as a promising approach to extending the perception quality of AVs by jointly processing shared camera feeds and sensor readings across multiple vehicles. This work presents a novel CP framework designed to optimize vehicle selection and networking resource utilization under imperfect communications. Our optimized CP formation considers critical factors such as the helper vehicles' spatial position, visual range, motion blur, and available communication budgets. Furthermore, our resource optimization module allocates communication channels while adjusting power levels to maximize data flow efficiency between the ego and helper vehicles, considering realistic models of modern vehicular communication systems, such as LTE and 5G NR-V2X. We validate our approach through extensive experiments on pedestrian detection in challenging scenarios, using synthetic data generated by the CARLA simulator. The results demonstrate that our method significantly improves upon the perception quality of individual AVs, with an approximately 10% gain in detection accuracy. This substantial gain highlights the untapped potential of CP to enhance AV safety and performance in complex situations.
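The budget-constrained CP formation described above can be sketched as a greedy selection over candidate helpers. Everything below is illustrative, not the authors' formulation: the utility function, weights, field names, and the use of a Shannon-capacity proxy for link rate are all assumptions.

```python
import math

# Hypothetical utility of recruiting a helper vehicle: reward a longer visual
# range, penalize motion blur, and weight by the link quality the helper offers.
# The functional form is an illustrative stand-in for the paper's objective.
def helper_utility(visual_range_m, motion_blur, snr_db):
    # Shannon-style spectral efficiency as a proxy for achievable data rate.
    rate = math.log2(1 + 10 ** (snr_db / 10))
    return rate * visual_range_m * (1.0 - motion_blur)

# Greedy CP formation under a communication budget: each selected helper
# consumes `cost` channel resources; pick helpers by utility per unit cost.
def select_helpers(candidates, budget):
    ranked = sorted(
        candidates,
        key=lambda c: helper_utility(c["range"], c["blur"], c["snr"]) / c["cost"],
        reverse=True,
    )
    chosen, spent = [], 0
    for c in ranked:
        if spent + c["cost"] <= budget:
            chosen.append(c["id"])
            spent += c["cost"]
    return chosen
```

A greedy knapsack-style heuristic like this is only one plausible reading of "optimized CP formation"; the paper's actual optimization may be exact or jointly solved with the power/channel allocation step.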


Enhanced Cooperative Perception for Autonomous Vehicles Using Imperfect Communication

Sarlak, Ahmad, Alzorgan, Hazim, Boroujeni, Sayed Pedram Haeri, Razi, Abolfazl, Amin, Rahul

arXiv.org Artificial Intelligence

Sharing and joint processing of camera feeds and sensor measurements, known as Cooperative Perception (CP), has emerged as a new technique to achieve higher perception quality. CP can enhance the safety of Autonomous Vehicles (AVs) where their individual visual perception quality is compromised by adverse weather conditions (e.g., haze and fog), low illumination, winding roads, and crowded traffic. To address the limitations of prior methods, in this paper, we propose a novel approach to realize optimized CP under constrained communications. At the core of our approach is recruiting the best helper from the available list of front vehicles to augment the visual range and enhance the Object Detection (OD) accuracy of the ego vehicle. In this two-step process, we first select the helper vehicles that contribute the most to CP based on their visual range and low motion blur. Next, we implement a radio block optimization among the candidate vehicles to further improve communication efficiency. We specifically focus on pedestrian detection as an exemplary scenario. To validate our approach, we used the CARLA simulator to create a dataset of annotated videos for different driving scenarios where pedestrian detection is challenging for an AV with compromised vision. Our results demonstrate the efficacy of our two-step optimization process in improving the overall performance of cooperative perception in challenging scenarios, substantially improving driving safety under adverse conditions. Finally, we note that the networking assumptions are adopted from LTE Release 14 Mode 4 side-link communication, commonly used for Vehicle-to-Vehicle (V2V) communication. Nonetheless, our method is flexible and applicable to arbitrary V2V communications.
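The two-step process above can be sketched as a ranking pass followed by a proportional split of a fixed pool of side-link radio blocks. All names, weights, and the scoring function are illustrative assumptions, not the paper's actual formulation.

```python
# Step 1: rank candidate front vehicles by a simple perception score that
# rewards visual range and penalizes motion blur (the weighting is a guess).
def perception_score(visual_range_m, motion_blur):
    return visual_range_m * (1.0 - motion_blur)

# Step 2: divide a fixed pool of side-link radio blocks (as in LTE Release 14
# Mode 4) among the top-k helpers, proportional to their perception scores.
def allocate_radio_blocks(helpers, total_blocks, k=2):
    # `helpers` is a list of (vehicle_id, visual_range_m, motion_blur) tuples.
    top = sorted(helpers, key=lambda h: perception_score(*h[1:]), reverse=True)[:k]
    total_score = sum(perception_score(r, b) for _, r, b in top)
    alloc = {hid: int(total_blocks * perception_score(r, b) / total_score)
             for hid, r, b in top}
    # Hand leftover blocks (lost to rounding down) to the best-scoring helper.
    leftover = total_blocks - sum(alloc.values())
    alloc[top[0][0]] += leftover
    return alloc
```

Proportional allocation is just one simple policy; the paper's radio-block optimization presumably accounts for channel conditions as well, which this sketch omits.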


The Introspective Agent: Interdependence of Strategy, Physiology, and Sensing for Embodied Agents

Pratt, Sarah, Weihs, Luca, Farhadi, Ali

arXiv.org Artificial Intelligence

The last few years have witnessed substantial progress in the field of embodied AI where artificial agents, mirroring biological counterparts, are now able to learn from interaction to accomplish complex tasks. Despite this success, biological organisms still hold one large advantage over these simulated agents: adaptation. While both living and simulated agents make decisions to achieve goals (strategy), biological organisms have evolved to understand their environment (sensing) and respond to it (physiology). The net gain of these factors depends on the environment, and organisms have adapted accordingly. For example, in a low-visibility aquatic environment some fish have evolved specific neurons which offer a predictable, but incredibly rapid, strategy to escape from predators. Mammals have lost these reactive systems, but they have much larger fields of view and brain circuitry capable of understanding many future possibilities. While traditional embodied agents manipulate an environment to best achieve a goal, we argue for an introspective agent, which considers its own abilities in the context of its environment. We show that different environments yield vastly different optimal designs, and increasing long-term planning is often far less beneficial than other improvements, such as increased physical ability. We present these findings to broaden the definition of improvement in embodied AI past increasingly complex models. Just as in nature, we hope to reframe strategy as one tool, among many, to succeed in an environment. Code is available at: https://github.com/sarahpratt/introspective.


'Ultra-precise' drone sends packages straight to you

Daily Mail - Science & tech

Scientists have created an 'ultra-precise' drone that can deliver packages directly to a person, rather than a location. The DelivAir drone uses GPS to navigate to a user's smartphone, updating its destination throughout its flight until it arrives within visual range. While it is currently still a concept, its designers believe that the drone could be used to deliver life-saving medical supplies in the future. The drone delivery system uses a two-stage routing process.


Alexa controlled drone can fly around home for security

Daily Mail - Science & tech

Do you ever worry that you left the stove on in your home, or that someone has broken in? If so, a new flying drone could put you at ease. The camera-equipped drone, called Aire, can be controlled via a companion smartphone app, allowing people to see what's going on in their homes while they've popped out to work or are busy running an errand. The drone also works with Amazon's Alexa voice assistant to respond to voice-prompted commands, or can fly autonomously, detect anomalies, and provide security alerts by recording a livestream of one's home. The drone's jet propulsion system is hidden inside its body by a fabric exterior, making it quiet and safe for home use.


In the U.S., Flying Drones Out of Sight Is Still Out of Mind

IEEE Spectrum Robotics

For years, companies like Amazon have promised that they'll eventually be delivering packages using drones. One problem, at least in the United States: The Federal Aviation Administration's (FAA's) Small UAS Rule doesn't allow drones to be flown outside the visual range of the remote pilot. That pretty much puts drone deliveries on hold. The FAA is, however, exploring how to relax that requirement and has waived it for a couple of companies, one of which is PrecisionHawk, based in Raleigh, North Carolina. The company is also working on a system for managing drone flights so that they can be safely conducted outside the operator's visual range.


Design of Emergent and Adaptive Virtual Players in a War RTS Game

Gutiérrez, José A. García, Cotta, Carlos, Fernández-Leiva, Antonio J.

arXiv.org Artificial Intelligence

In (one-player) war Real-Time Strategy (wRTS) games, a human player controls, in real time, an army consisting of a number of soldiers, and her aim is to destroy the opponent's assets; the opponent is a virtual (i.e., non-human-controlled) player that usually consists of a pre-programmed decision-making script. These scripts are usually associated with some well-known problems (e.g., predictability, non-rationality, repetitive behaviors, and a sensation of artificial stupidity, among others). This paper describes a method for the automatic generation of virtual players that adapt to the player's skills; this is done by initially building a model of the player's behavior in real time during the game, and then evolving the virtual player via this model between games. The paper also shows preliminary results obtained on a one-player wRTS game constructed specifically for experimentation.
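The between-games adaptation loop described above can be sketched as a simple (1+1)-style evolutionary search over script parameters, where fitness is computed against a model of the human player. The skill model, parameter encoding, and fitness function below are illustrative stand-ins, not the authors' actual method.

```python
import random

# Toy sketch: between games, evolve the virtual player's script parameters
# so that its strength tracks an estimate of the human player's skill
# (derived, in the paper, from a behavior model built during play).
def fitness(params, player_skill):
    # Illustrative: treat the mean of the script parameters as the virtual
    # player's strength; best fitness when strength matches the human's skill.
    strength = sum(params) / len(params)
    return -abs(strength - player_skill)

def evolve_virtual_player(params, player_skill, generations=200, seed=0):
    rng = random.Random(seed)
    best = list(params)
    for _ in range(generations):
        # (1+1)-style mutation: perturb one parameter, keep the child if it
        # is no worse than the current best.
        child = list(best)
        i = rng.randrange(len(child))
        child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
        if fitness(child, player_skill) >= fitness(best, player_skill):
            best = child
    return best
```

In the paper's setting the fitness evaluation would run the candidate script against the learned player model rather than a scalar skill estimate; the scalar here only keeps the sketch self-contained.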